IEEE INFOCOM 2024

Session D-8

D-8: Backscatter Networking

Conference
8:30 AM — 10:00 AM PDT
Local
May 23 Thu, 11:30 AM — 1:00 PM EDT
Location
Regency D

TRIDENT: Interference Avoidance in Multi-reader Backscatter Network Via Frequency-space Division

Yang Zou (Tsinghua University, China); Xin Na (Tsinghua University, China); Xiuzhen Guo (Zhejiang University, China); Yimiao Sun and Yuan He (Tsinghua University, China)

Backscatter is an enabling technology for battery-free sensing in industrial IoT applications. To fully cover the numerous tags in a deployment area, one often needs to deploy multiple readers, each communicating with the tags within its range. But the backscattered signals from a tag are likely to reach readers outside its communication range, causing undesired interference. Conventional approaches to interference avoidance, whether TDMA or CSMA based, separate the readers' media accesses in the time dimension and suffer from limited network throughput. In this paper, we propose TRIDENT, a novel backscatter tag design that enables interference avoidance through frequency-space division. By incorporating a tunable bandpass filter and multiple terminal loads, a TRIDENT tag is able to detect its channel condition and adaptively adjust the frequency band and the power of its backscattered signals, so that all the readers in the network can operate concurrently without interfering with one another. We implement TRIDENT and evaluate its performance under various settings. The results demonstrate that TRIDENT enhances network throughput by 3.18× compared to the TDMA-based scheme.
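A minimal sketch of the frequency-space division idea, under a deliberately simplified tag model: the tag probes each candidate sub-band, picks the quietest one, and then chooses the strongest terminal load whose estimated leakage stays below a threshold. The band list, load values, threshold, and the simulated channel measurement are illustrative assumptions, not TRIDENT's actual design.

```python
import random

# Candidate sub-bands (MHz) and terminal loads (relative reflected power).
# These values are illustrative assumptions, not TRIDENT's actual parameters.
BANDS = [902.0, 910.0, 918.0, 926.0]
LOADS = [0.25, 0.5, 1.0]

def sense_channel(band_mhz: float) -> float:
    """Simulated channel-condition detection: returns the estimated interference
    a reflection on this band would cause at out-of-range readers."""
    return random.random()  # stand-in for the tag's real measurement

def select_band_and_power(threshold: float = 0.3):
    """Frequency-space division in miniature: choose the quietest band, then the
    strongest reflection whose estimated leakage stays under the threshold."""
    readings = {band: sense_channel(band) for band in BANDS}
    band = min(readings, key=readings.get)
    for load in sorted(LOADS, reverse=True):
        if readings[band] * load <= threshold:
            return band, load
    return band, min(LOADS)  # fall back to the weakest reflection

if __name__ == "__main__":
    print(select_band_and_power())
```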

ConcurScatter: Scalable Concurrent OFDM Backscatter Using Subcarrier Pattern Diversity

Caihui Du (Beijing Institute of Technology, China); Jihong Yu (Beijing Institute of Technology, China); Rongrong Zhang (Capital Normal University, China); Jianping An (Beijing Institute of Technology, China)

Ambient OFDM backscatter communication has attracted considerable research effort. Yet prior works focus on point-to-point backscatter from a single tag, leaving efficient backscatter networking of multiple tags unaddressed. In this paper, we design and implement ConcurScatter, the first ambient OFDM backscatter system that scales to concurrent transmission of hundreds of tags. Our key innovation is building and using subcarrier pattern diversity to distinguish concurrent tags. This yields a number of collision states that grows linearly, rather than exponentially as in prior works based on IQ-domain diversity, thereby supporting more concurrent transmissions. We realize this by designing a suite of techniques, including midair frequency synthesis that forms a unique subcarrier pattern for each concurrent tag, non-integer cyclic shift that supports more concurrent tags, and subcarrier pattern reconstruction that creates virtual subcarriers to enable single-symbol parallel decoding. Testbed experiments confirm that ConcurScatter supports seven more concurrent tags with similar BER and 8.4× higher throughput than the point-to-point backscatter system RapidRider. A large-scale simulation shows that ConcurScatter supports 200 tags, 40× more than the state-of-the-art concurrent OFDM backscatter system FreeCollision.
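A toy illustration of distinguishing concurrent tags by subcarrier pattern rather than by IQ state, under heavy simplifying assumptions: each tag is modeled as imposing a distinct (possibly non-integer) cyclic shift that shows up as a distinct subcarrier pattern in the receiver's spectrum, and the receiver identifies active tags by correlating the received spectrum against each known pattern. The subcarrier count, shift values, and correlation decoder are assumptions, not the paper's design.

```python
import numpy as np

N_SUB = 64                        # OFDM subcarriers (toy value)
SHIFTS = [3, 7.5, 12, 20.25]      # per-tag cyclic shifts; non-integer shifts allowed

def tag_pattern(shift: float) -> np.ndarray:
    """Subcarrier pattern of one tag: a frequency-domain peak for integer shifts,
    a characteristic spread for non-integer shifts (illustrative model)."""
    k = np.arange(N_SUB)
    return np.abs(np.fft.fft(np.exp(2j * np.pi * shift * k / N_SUB)))

def detect_tags(rx_spectrum: np.ndarray, threshold: float = 0.5):
    """Match the received spectrum against each tag's pattern; a high normalized
    correlation indicates that the tag transmitted in this symbol."""
    hits = []
    for i, shift in enumerate(SHIFTS):
        p = tag_pattern(shift)
        corr = rx_spectrum @ p / (np.linalg.norm(rx_spectrum) * np.linalg.norm(p))
        if corr > threshold:
            hits.append(i)
    return hits

# Toy usage: tags 0 and 2 reflect concurrently; the decoder recovers both.
rx = tag_pattern(SHIFTS[0]) + tag_pattern(SHIFTS[2])
print(detect_tags(rx))            # -> [0, 2]
```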

Efficient LTE Backscatter with Uncontrolled Ambient Traffic

Yifan Yang, Yunyun Feng and Wei Gong (University of Science and Technology of China, China); Yu Yang (City University of Hong Kong, Hong Kong)

Ambient LTE backscatter is a promising way to enable ubiquitous wireless communication with ultra-low power and cost. However, modulation in previous LTE backscatter systems relies heavily on the original data (content) of the excitation signals: they either demodulate tag data using an additional receiver that provides the content of the excitation, or modulate on a few predefined reference signals in random ambient LTE traffic. This paper presents CABLTE, a content-agnostic backscatter system that efficiently utilizes uncontrolled LTE PHY resources for backscatter communication using a single receiver. Our system is superior to prior work in two aspects: 1) using one receiver to obtain tag data makes CABLTE more practical in real-world applications, and 2) efficient modulation on LTE PHY resources improves the data rate of backscatter communication. To obtain the tag data without knowing the ambient content, we design a checksum-based codeword translation method. We also propose a customized channel estimation scheme and a signal identification component in the backscatter system to ensure accurate modulation and demodulation. Extensive experiments show that CABLTE provides a maximum tag throughput of 22 kbps, which is 3.67× higher than the content-agnostic system CAB and even 1.38× higher than the content-based system SyncLTE.
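The checksum-based codeword translation idea can be sketched as follows, with CRC32 standing in for whatever checksum CABLTE actually uses (an assumption): the receiver enumerates candidate translations of the received codeword and keeps the one whose checksum verifies, so the ambient LTE content never needs to be known.

```python
import zlib

def encode_with_checksum(tag_bits: bytes) -> bytes:
    """Tag side (conceptually): append a CRC32 so the receiver can validate a
    candidate translation without knowing the ambient LTE content."""
    return tag_bits + zlib.crc32(tag_bits).to_bytes(4, "big")

def translate_codeword(candidates):
    """Receiver side: among candidate codeword translations (e.g., produced under
    different assumptions about the unknown ambient symbols), keep the one whose
    checksum verifies. Returns None if no candidate passes."""
    for cand in candidates:
        payload, crc = cand[:-4], cand[-4:]
        if zlib.crc32(payload).to_bytes(4, "big") == crc:
            return payload
    return None

# Toy usage: one corrupted candidate and one clean candidate.
good = encode_with_checksum(b"\x5a\x3c")
bad = bytes([good[0] ^ 0x01]) + good[1:]
print(translate_codeword([bad, good]))   # -> b'Z<'
```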

Efficient Two-Way Edge Backscatter with Commodity Bluetooth

Maoran Jiang (University of Science and Technology of China, China); Xin Liu (The Ohio State University, USA); Li Dong (Macau University of Science and Technology, Macao); Wei Gong (University of Science and Technology of China, China)

Two-way backscatter is essential to general-purpose backscatter communication, as it provides the rich interaction needed to support diverse applications on commercial devices. However, existing Bluetooth backscatter systems suffer from unstable uplinks due to poor carrier-identification capability and inefficient downlinks caused by packet-length modulation. This paper proposes EffBlue, an efficient two-way backscatter design for commercial Bluetooth devices. EffBlue employs a simple edge backscatter server that alleviates the computational burden on the tag and helps build efficient uplinks and downlinks. Specifically, efficient uplinks are achieved by introducing an accurate synchronization scheme, which effectively eliminates the use of non-compliant packets as carriers. To break the limitation of packet-length modulation, we design a new symbol-level WiFi-ASK downlink in which the edge sends ASK-like WiFi signals and the tag decodes them using a simple envelope detector. We prototype the edge server using commodity WiFi and Bluetooth chips and build two-way backscatter tags with FPGAs. Experimental results show that EffBlue can identify the target excitations with more than 99% precision. Meanwhile, its WiFi-ASK downlink achieves up to 124 kbps, which is 25× better than FreeRider.
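A minimal sketch of the tag-side envelope detection on such an ASK-like downlink: the tag averages the signal magnitude over each symbol window and compares it to a threshold. The symbol length, threshold, and synthetic carrier below are illustrative assumptions, not EffBlue's implementation.

```python
import numpy as np

def envelope_detect(samples: np.ndarray, sym_len: int, threshold: float) -> list[int]:
    """Toy envelope detector for an ASK-like downlink: average the magnitude over
    each symbol window and threshold it to recover one bit per symbol."""
    n_sym = len(samples) // sym_len
    bits = []
    for i in range(n_sym):
        window = samples[i * sym_len:(i + 1) * sym_len]
        bits.append(1 if np.mean(np.abs(window)) > threshold else 0)
    return bits

# Toy usage: three symbols (1, 0, 1) carried by high/low-amplitude carrier segments.
sym = 100
t = np.arange(sym)
carrier = np.exp(2j * np.pi * 0.1 * t)
rx = np.concatenate([1.0 * carrier, 0.2 * carrier, 1.0 * carrier])
print(envelope_detect(rx, sym, threshold=0.5))   # -> [1, 0, 1]
```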

Session Chair

Fernando A. Kuipers (Delft University of Technology, The Netherlands)

Session D-9

D-9: RFID and Wireless Charging

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 1:30 PM — 3:00 PM EDT
Location
Regency D

RF-Boundary: RFID-Based Virtual Boundary

Xiaoyu Li and Jia Liu (Nanjing University, China); Xuan Liu (Hunan University, China); Yanyan Wang (Hohai University, China); Shigeng Zhang (Central South University, China); Baoliu Ye and Lijun Chen (Nanjing University, China)

A boundary is a physical or virtual line that marks the edge or limit of a specific region; it is widely used in applications such as autonomous driving, virtual walls, and robotic lawn mowers. However, none of the existing work balances the cost, deployability, and scalability of a boundary well. In this paper, we propose a new RFID-based boundary scheme, together with its detection algorithm, called RF-Boundary, which has the competitive advantages of being battery-free, low-cost, and easy to maintain. We develop two techniques, phase gradient and dual-antenna DOA, to address the key challenges posed by RF-Boundary in terms of the lack of calibration information and multi-edge interference. We implement a prototype of RF-Boundary with commercial RFID systems and a mobile robot. Extensive experiments verify the feasibility as well as the good performance of RF-Boundary.
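The dual-antenna DOA component can be illustrated with the textbook two-antenna formula, which estimates the arrival angle from the phase difference measured across a known antenna spacing. The carrier frequency and spacing below are assumptions, and RF-Boundary's actual pipeline additionally handles the lack of calibration information and multi-edge interference.

```python
import numpy as np

def doa_from_phase(phase1: float, phase2: float, wavelength: float, spacing: float) -> float:
    """Estimate direction of arrival (degrees from broadside) from the phase
    difference at two antennas separated by `spacing` meters. This is the
    standard two-antenna DOA relation, not necessarily RF-Boundary's exact one."""
    dphi = np.angle(np.exp(1j * (phase2 - phase1)))      # wrap to (-pi, pi]
    s = wavelength * dphi / (2 * np.pi * spacing)
    return float(np.degrees(np.arcsin(np.clip(s, -1.0, 1.0))))

# Toy usage at 920 MHz (UHF RFID) with lambda/4 antenna spacing.
lam = 3e8 / 920e6
print(doa_from_phase(0.0, np.pi / 4, lam, lam / 4))      # ~30 degrees
```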

Safety Guaranteed Power-Delivered-to-Load Maximization for Magnetic Wireless Power Transfer

Wangqiu Zhou, Xinyu Wang, Hao Zhou and ShenYao Jiang (University of Science and Technology of China, China); Zhi Liu (The University of Electro-Communications, Japan); Yusheng Ji (National Institute of Informatics, Japan)

Electromagnetic radiation (EMR) safety has always been a critical factor hindering the development of magnetic wireless power transfer technology: users care about the actual energy received at charging devices while also paying attention to their health. Thus, we study this significant problem and propose a universal, safety-guaranteed power-delivered-to-load (PDL) maximization scheme called SafeGuard. Technically, we first utilize an off-the-shelf electromagnetic simulator to perform EMR distribution analysis, ensuring the universality of the method. Then, we introduce the concept of multiple importance sampling to achieve efficient EMR safety constraint extraction. Finally, we treat the proposed optimization problem as an optimal boundary point search problem from the perspective of space geometry, and devise a grid-based multi-constraint parallel processing algorithm to solve it efficiently. We implement a system prototype of SafeGuard and conduct extensive experiments to evaluate it. The results indicate that SafeGuard improves the achieved PDL by up to 1.75× compared with the state-of-the-art baseline while guaranteeing EMR safety. Furthermore, SafeGuard accelerates the solution process by 29.12× compared with the traditional numerical method, satisfying the fast optimization requirement of wireless charging systems.
Speaker Junaid Ahmed Khan
Junaid Ahmed Khan is a PACCAR endowed Assistant Professor in Electrical and Computer Engineering at Western Washington University. Previously, he was a research associate at the Center for Urban Science and Progress (CUSP) and the Connected Cities with Smart Transportation (C2SMART) center at New York University from September 2019 to September 2020. He was also a research fellow at the DRONES and Smart City research clusters at the FedEx Institute of Technology, University of Memphis, from January 2018 to August 2019. He worked as a senior researcher in the Inria Agora team, CITI lab, at the National Institute of Applied Sciences (INSA), Lyon, France, from October 2016 to January 2018. He received his Ph.D. in Computer Science from Université Paris-Est Marne-la-Vallée, France, in November 2016. His research interests are cyber-physical systems with an emphasis on Connected Autonomous Vehicles (CAVs) and the Internet of Things (IoT).

Dynamic Power Distribution Controlling for Directional Chargers

Yuzhuo Ma, Dié Wu and Jing Gao (Sichuan Normal University, China); Wen Sun (Northwestern Polytechnical University, China); Jilin Yang and Tang Liu (Sichuan Normal University, China)

Recently, deploying static chargers to construct timely and robust Wireless Rechargeable Sensor Networks (WRSNs) has become an important research issue for solving the limited-energy problem of wireless sensor networks. However, the established fixed power distribution lacks flexibility in responding to dynamic charging requests from sensors and may leave some sensors continuously impacted by destructive wave interference. This results in a gap between energy supply and practical demand, making the charging process less efficient. In this paper, we focus on real-time sensor charging requests and formulate a dynamic power disTributIon controlling for Directional chargErs (TIDE) problem to maximize the overall charging utility. To solve the problem, we first build a charging model for directional chargers that considers wave interference, and extract candidate charging orientations from the continuous search space. Then we propose a neighbor set division method to narrow the scope of calculation. Finally, we design a dynamic power distribution controlling algorithm to update the neighbor sets in a timely manner and select optimal orientations for the chargers. Our experimental results demonstrate the effectiveness and efficiency of the proposed scheme, which outperforms the comparison algorithms by 142.62% on average.
Speaker Tianlong Li
Tianlong Li received the B.Eng. degree in electrical engineering from the Beijing Institute of Technology, Beijing, China, in 2019, where he is currently pursuing the Ph.D. degree with the School of Computer Science and Technology. His research interests include future Internet, information-centric network, and high-speed network processing.

LoMu: Enable Long-Range Multi-Target Backscatter Sensing for Low-Cost Tags

Yihao Liu, Jinyan Jiang and Jiliang Wang (Tsinghua University, China)

Backscatter sensing has shown great potential in the Internet of Things (IoT) and has attracted substantial research interest. We present LoMu, the first long-range multi-target backscatter sensing system for low-cost tags under ambient LoRa. LoMu analyzes the received low-SNR backscatter signals from different tags and calculates their phases to derive motion information. The design of LoMu faces practical challenges, including near-far interference between multiple tags, phase offsets induced by unsynchronized transceivers, and phase errors due to frequency drift in low-cost tags. We propose a conjugate-based energy concentration method to extract high-quality signals and a Hamming-window-based method to alleviate the near-far problem. We then leverage the relationship between the excitation signal and the backscatter signals to synchronize the TX and RX. Finally, we combine the double sidebands of the backscatter signals to cancel the tag frequency drift. We implement LoMu and conduct extensive experiments to evaluate its performance. The results demonstrate that LoMu can accurately sense 35 tags at the same time. The average frequency sensing error is 0.7% at 400 m, which is 4× the distance of the state-of-the-art.
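A minimal sketch of the Hamming-window step used against the near-far problem: windowing the received samples before the FFT lowers spectral leakage, so a weak far tag's tone stays visible next to a strong near tag's tone. The sampling rate, tone frequencies, and amplitudes are synthetic assumptions; the conjugate-based energy concentration and sideband combination are not modeled here.

```python
import numpy as np

def windowed_spectrum(samples: np.ndarray, fs: float):
    """Hamming-windowed magnitude spectrum of the received backscatter mixture.
    The window's low sidelobes keep a strong nearby tag from burying a weak far
    one -- the near-far idea in miniature (LoMu's full pipeline does much more)."""
    win = np.hamming(len(samples))
    spec = np.abs(np.fft.rfft(samples * win))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    return freqs, spec

# Toy usage: a strong 10 kHz tag tone next to a tag 50x weaker at 13 kHz.
fs, n = 100e3, 4000
t = np.arange(n) / fs
rx = 1.0 * np.cos(2 * np.pi * 10e3 * t) + 0.02 * np.cos(2 * np.pi * 13e3 * t)
freqs, spec = windowed_spectrum(rx, fs)
for f0 in (10e3, 13e3):
    b = int(round(f0 * n / fs))
    print(f"{freqs[b] / 1e3:.0f} kHz tone magnitude: {spec[b]:.1f}")
```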

Session Chair

Filip Maksimovic (Inria, France)

Session D-10

D-10: High Speed Networking

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 4:30 PM — 6:00 PM EDT
Location
Regency D

Transparent Broadband VPN Gateway: Achieving 0.39 Tbps per Tunnel with Bump-in-the-Wire

Kenji Tanaka (NTT, Japan); Takashi Uchida and Yuki Matsuda (Fixstars, Japan); Yuki Arikawa (NTT, Japan); Shinya Kaji (Fixstars, Japan); Takeshi Sakamoto (NTT, Japan)

The demand for virtual private networks (VPNs) that provide confidentiality, integrity, and authenticity of communications is growing every year. IPsec is one of the oldest and most widely used VPN protocols, implemented between the internet protocol (IP) layer and the data link layer of the Linux kernel. This implementation method, known as bump-in-the-stack, has the advantage of transparently applying IPsec to traffic without changing the application. However, its throughput efficiency (Gbps/core) is worse than that of regular Linux communication. Therefore, we chose the bump-in-the-wire (BITW) architecture, which handles IPsec in hardware separate from the host. Our proposed BITW architecture consists of inline cryptographic accelerators implemented in field-programmable gate arrays and a programmable switch that connects multiple such accelerators. The VPN gateway implemented with our architecture is transparent and improves throughput efficiency by 3.51 times and power efficiency by 3.40 times over a VPN gateway implemented in the Linux kernel. It also demonstrates excellent scalability and has been confirmed to scale to a maximum of 386.24 Gbps per tunnel, exceeding state-of-the-art technology in maximum throughput and efficiency per tunnel. In multi-tunnel use cases, the proposed architecture improves energy efficiency by 2.49 times.

Non-invasive performance prediction of high-speed softwarized network services with limited knowledge

Qiong Liu (Telecom Paris, Institut Polytechnique de Paris, France); Tianzhu Zhang (Nokia Bell Labs, France); Leonardo Linguaglossa (Telecom Paris, France)

Modern telco networks have experienced a significant paradigm shift in the past decade, thanks to the proliferation of network softwarization. Despite the benefits of softwarized networks, the constituent software data planes cannot always guarantee predictable performance due to resource contention in the underlying shared infrastructure. Performance prediction is thus paramount for network operators to fulfill Service-Level Agreements (SLAs), especially in high-speed regimes (e.g., Gigabit or Terabit Ethernet). Existing solutions rely heavily on in-band feature collection, which imposes non-trivial engineering and data-path overhead.

This paper proposes a non-invasive approach to data-plane performance prediction: our framework complements state-of-the-art solutions by measuring and analyzing low-level features ubiquitously available in the network infrastructure. Accessing these features does not hamper the packet data path, and our approach does not rely on prior knowledge of the input traffic, the VNFs' internals, or system details. We show that (i) low-level hardware features exposed by the NFV infrastructure can be collected and interpreted to diagnose performance issues, (ii) predictive models can be derived with classical ML algorithms, and (iii) these models can accurately predict performance impairments in real NFV systems. Our code and datasets are publicly available.
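A minimal sketch of the prediction step under stated assumptions: low-level hardware counters are fed to a classical ML regressor that predicts a data-plane performance metric. The feature names and the synthetic data below are placeholders, not the paper's feature set or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for non-invasive, low-level features (assumed names).
rng = np.random.default_rng(0)
n = 2000
features = np.column_stack([
    rng.uniform(0.1, 0.9, n),   # e.g., LLC miss ratio
    rng.uniform(0.0, 1.0, n),   # e.g., memory-bandwidth utilization
    rng.uniform(0.2, 1.0, n),   # e.g., core IPC
])
# Synthetic performance target (Gbps-like), used only to exercise the model.
throughput = (10.0 * features[:, 2] - 4.0 * features[:, 0]
              - 3.0 * features[:, 1] + rng.normal(0, 0.3, n))

X_tr, X_te, y_tr, y_te = train_test_split(features, throughput, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out samples: {model.score(X_te, y_te):.2f}")
```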

BurstDetector: Real-Time and Accurate Across-Period Burst Detection in High-Speed Networks

Zhongyi Cheng, Guoju Gao, He Huang, Yu-e Sun and Yang Du (Soochow University, China); Haibo Wang (University of Kentucky, USA)

Traffic measurement provides essential information for various network services. Bursts are a common phenomenon in high-speed network streams, manifesting as a surge in the number of a flow's incoming packets. We propose a new definition, the across-period burst, which considers the change not between two adjacent time windows but between two groups of windows with time continuity. The across-period burst definition better captures the continuous changes of flows in high-speed networks. To achieve real-time burst detection with high accuracy and low memory consumption, we propose a novel sketch named BurstDetector, which consists of two stages. Stage 1 excludes flows that cannot become burst flows, while Stage 2 accurately records the information of the potential burst flows and carries out across-period burst detection at the end of every time window. We further propose an optimization called Hierarchical Cell, which improves the memory utilization of BurstDetector. In addition, we analyze the estimation accuracy and time complexity of BurstDetector. Extensive experiments based on real-world datasets show that BurstDetector achieves at least 2.8× the detection accuracy and processing throughput of existing algorithms.
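A toy sketch of the two-stage, across-period idea, assuming exact per-flow counting in place of the paper's compact sketch structures: Stage 1 filters out flows too small to burst in the current window, and Stage 2 compares the packet count of the most recent group of windows against the previous group. The thresholds, group size, and burst factor are illustrative assumptions.

```python
from collections import defaultdict, deque

class TwoStageBurstDetector:
    """Toy across-period burst detector. Stage 1 (per-window threshold) excludes
    flows too small to burst; Stage 2 keeps recent window counts of the surviving
    flows and flags a flow when its latest group of windows carries `factor` times
    the packets of the previous group. Exact counting stands in for the sketch."""

    def __init__(self, stage1_threshold=20, group=3, factor=2.0):
        self.th, self.group, self.factor = stage1_threshold, group, factor
        self.current = defaultdict(int)                               # Stage 1
        self.history = defaultdict(lambda: deque(maxlen=2 * group))   # Stage 2

    def insert(self, flow):
        self.current[flow] += 1

    def end_window(self):
        """Close the current window and report across-period bursts."""
        bursts = []
        tracked = set(self.history) | {f for f, c in self.current.items() if c >= self.th}
        for flow in tracked:
            self.history[flow].append(self.current.get(flow, 0))
            h = list(self.history[flow])
            if len(h) == 2 * self.group:
                prev, recent = sum(h[:self.group]), sum(h[self.group:])
                if recent >= self.factor * max(prev, 1):
                    bursts.append(flow)
        self.current.clear()
        return bursts

# Toy usage: flow "A" more than doubles its rate after three quiet windows.
det = TwoStageBurstDetector(stage1_threshold=5, group=3, factor=2.0)
for rate in (10, 10, 10, 25, 25, 25):
    for _ in range(rate):
        det.insert("A")
    reported = det.end_window()
print(reported)   # -> ['A'] (burst flagged at the end of the last window)
```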

NetFEC: In-network FEC Encoding Acceleration for Latency-sensitive Multimedia Applications

Yi Qiao, Han Zhang and Jilong Wang (Tsinghua University, China)

In the face of packet loss, latency-sensitive multimedia applications cannot afford retransmission, because loss detection and retransmission lead to extra latency or otherwise compromised media quality. Alternatively, forward error correction (FEC) ensures reliability by adding redundancy and achieves lower latency at the cost of bandwidth and computational overhead. We propose to relocate FEC encoding to hardware that better suits its computational pattern than CPUs. In this paper, we present NetFEC, an in-network acceleration system that offloads the entire FEC encoding process onto emerging programmable switching ASICs, eliminating all CPU involvement. We design a ghost packet mechanism so that NetFEC remains compatible with important media transport functionalities, including congestion control, pacing, and statistics. We integrate NetFEC with WebRTC and conduct extensive experiments on real hardware. Our evaluation demonstrates that NetFEC relieves the server CPU burden and adds negligible overhead.
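FEC encoding in its simplest form is a parity computation over a block of media packets, which is exactly the kind of regular, stateless pattern that maps well to switch hardware. The sketch below uses a single XOR parity for illustration (an assumption; NetFEC targets richer codes on a programmable ASIC) and shows how one lost packet in a block is recovered.

```python
def xor_parity(packets: list[bytes]) -> bytes:
    """Single XOR parity over a block of media packets (padded to the longest).
    Any one lost packet in the block can be recovered by XOR-ing the parity with
    the surviving packets -- the simplest flavor of forward error correction."""
    length = max(len(p) for p in packets)
    parity = bytearray(length)
    for p in packets:
        for i, byte in enumerate(p.ljust(length, b"\x00")):
            parity[i] ^= byte
    return bytes(parity)

# Toy usage: recover a lost packet from the parity and the surviving packets.
block = [b"frame-0", b"frame-1", b"frame-2"]
parity = xor_parity(block)
recovered = xor_parity([block[0], block[2], parity])   # XOR cancels the survivors
print(recovered[:len(block[1])])                       # -> b'frame-1'
```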

Session Chair

Baochun Li (University of Toronto, Canada)

Session D-11

D-11: Network Computing and Offloading

Conference
3:30 PM — 5:00 PM PDT
Local
May 23 Thu, 6:30 PM — 8:00 PM EDT
Location
Regency D

Analog In-Network Computing through Memristor-based Match-Compute Processing

Saad Saleh, Anouk S. Goossens, Sunny Shu and Tamalika Banerjee (University of Groningen, The Netherlands); Boris Koldehofe (TU Ilmenau, Germany)

Current network functions consume a significant amount of energy and lack the capacity to support more expressive learning models such as neuromorphic functions. The major reason is the underlying transistor-based components, which require continuous, energy-intensive data movement between storage and computational units. In this research, we propose the use of a novel component, the memristor, which colocates computation and storage. Building on memristors, we propose the concept of match-compute processing for supporting energy-efficient network functions. Given the analog processing capability of memristors, we propose a Probabilistic Content Addressable Memory (pCAM) abstraction that provides analog match functions: pCAM produces deterministic and probabilistic outputs depending on how closely an incoming query matches the specified network policy, and uses a crossbar array for line-rate matrix multiplications on the match outputs. We propose a match-compute packet processing architecture and develop programming abstractions for a baseline network function, Active Queue Management, which drops packets based on higher-order derivatives of sojourn times and buffer sizes. Analysis of match-compute processing on a physically fabricated memristor chip showed an energy consumption of only 0.01 fJ/bit/cell, which is 50 times better than match-action processing.
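The crossbar computation at the heart of match-compute processing is, conceptually, an analog matrix-vector multiply: programmed conductances form a matrix, applied voltages form a vector, and Kirchhoff's current law sums the products along each column in a single step. The numerical sketch below only mimics that arithmetic; the conductance values are arbitrary assumptions, and the probabilistic match stage of pCAM is not modeled.

```python
import numpy as np

# Programmed memristor conductances (arbitrary units), one row per input line.
G = np.array([[1.0, 0.2, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.4, 1.0]])

# Match-stage outputs applied as line voltages.
v = np.array([0.8, 0.0, 0.3])

# Column currents i = G^T v: the multiply-accumulate the crossbar performs
# in the analog domain, here reproduced digitally for illustration.
currents = G.T @ v
print(currents)
```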

Carlo: Cross-Plane Collaboration for Multiple In-network Computing Applications

Xiaoquan Zhang, Lin Cui and WaiMing Lau (Jinan University, China); Fung Po Tso (Loughborough University, United Kingdom (Great Britain)); Yuhui Deng (Jinan University, China); Weijia Jia (Beijing Normal University (Zhuhai) and UIC, China)

In-network computing (INC) is a new paradigm that allows applications to be executed within the network rather than on dedicated servers. Conventionally, INC applications have been deployed exclusively on the data plane (e.g., programmable ASICs), offering impressive performance. However, the data plane's efficiency is hindered by limited resources, which can prevent a comprehensive deployment of applications. On the other hand, offloading compute tasks to the control plane, which is underpinned by general-purpose servers with ample resources, provides greater flexibility but comes with significantly reduced efficiency, especially when the system operates under heavy load. To simultaneously exploit the efficiency of the data plane and the flexibility of the control plane, we propose Carlo, a cross-plane collaborative optimization framework that supports the network-wide deployment of multiple INC applications. Carlo first analyzes the resource requirements of various INC applications across different planes. It then establishes mathematical models for cross-plane resource allocation and automatically generates solutions using the proposed algorithms. We have implemented a prototype of Carlo on Intel Tofino ASIC switches and DPDK. Experimental results demonstrate that Carlo can compute solutions in a short time while ensuring that the deployment scheme does not suffer from performance degradation.

TileSR: Accelerate On-Device Super-Resolution with Parallel Offloading in Tile Granularity

Ning Chen and Sheng Zhang (Nanjing University, China); Yu Liang (Nanjing Normal University, China); Jie Wu (Temple University, USA); Yu Chen, Yuting Yan, Zhuzhong Qian and Sanglu Lu (Nanjing University, China)

Recent years have witnessed the unprecedented performance of convolutional networks in image super-resolution (SR). SR upscales a single low-resolution image to meet application-specific image quality demands, making it vital for mobile devices. However, the excessive computational and memory requirements of SR tasks pose a challenge when mapping SR networks onto a single resource-constrained mobile device, especially for an ultra-high target resolution. This work presents TileSR, a novel framework for efficient image SR through tile-granular parallel offloading across multiple collaborative mobile devices. In particular, for an incoming image, TileSR first uniformly divides it into multiple tiles and selects the top-K tiles with the highest upscaling difficulty (quantified by mPV). Then, we propose a tile scheduling algorithm based on a multi-agent multi-armed bandit, which attains accurate offload rewards through an exploration phase, derives the tile packing decision from the reward estimates, and exploits this decision to schedule the selected tiles. We have implemented TileSR fully on COTS hardware, and the experimental results demonstrate that TileSR reduces response latency by 17.77–82.2% while improving image quality by 2.38–10.57% compared to other alternatives.
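A minimal sketch of the tile selection step, with gradient energy standing in for the paper's mPV difficulty metric (an assumption): the image is split into fixed-size tiles and the top-K hardest tiles are chosen for offloading. The tile size, K, and scoring function are illustrative; the bandit-based scheduler is not modeled here.

```python
import numpy as np

def split_tiles(img: np.ndarray, tile: int):
    """Split an HxW grayscale image into non-overlapping tile-sized patches."""
    h, w = img.shape[:2]
    return [((r, c), img[r:r + tile, c:c + tile])
            for r in range(0, h - tile + 1, tile)
            for c in range(0, w - tile + 1, tile)]

def select_topk_tiles(img: np.ndarray, tile: int, k: int):
    """Pick the k tiles hardest to upscale. Gradient energy is a stand-in for
    the paper's mPV difficulty metric (an assumption for illustration)."""
    scored = []
    for pos, patch in split_tiles(img, tile):
        gy, gx = np.gradient(patch.astype(float))
        scored.append((float(np.mean(gx ** 2 + gy ** 2)), pos))
    scored.sort(reverse=True)
    return [pos for _, pos in scored[:k]]

# Toy usage: a flat image with one textured region -> that tile ranks first.
img = np.zeros((128, 128))
img[32:64, 64:96] = np.random.default_rng(0).random((32, 32))
print(select_topk_tiles(img, tile=32, k=1))   # -> [(32, 64)]
```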

SECO: Multi-Satellite Edge Computing Enabled Wide-Area and Real-Time Earth Observation Missions

Zhiwei Zhai (Sun Yat-Sen University, China); Liekang Zeng (Hong Kong University of Science and Technology (Guangzhou) & Sun Yat-Sen University, China); Tao Ouyang and Shuai Yu (Sun Yat-Sen University, China); Qianyi Huang (Sun Yat-Sen University, China & Peng Cheng Laboratory, China); Xu Chen (Sun Yat-sen University, China)

Rapid advances in low Earth orbit (LEO) satellite technology and satellite edge computing (SEC) have given LEO satellites a key role in enhanced Earth observation missions (EOM). These missions typically require multi-satellite cooperative observation of a large region of interest (RoI), as well as routing and computational processing of the observation images, to enable accurate and real-time responsiveness. However, optimizing the resources of LEO satellite networks is nontrivial given their dynamic and heterogeneous properties. To this end, we propose SECO, an SEC-enabled framework that jointly optimizes multi-satellite observation scheduling, routing, and computation node selection for enhanced EOM. Specifically, in the observation phase, we leverage the orbital motion and the rotatable onboard cameras of satellites and propose a distributed game-based scheduling strategy to minimize the overall size of the captured images while ensuring full observation coverage. In the subsequent routing and computation phase, we first adopt image splitting to achieve parallel transmission and computation. Then, we propose an efficient iterative algorithm to jointly optimize image splitting, routing, and computation node selection for each captured image. On this basis, we propose a theoretically guaranteed, system-wide greedy strategy to reduce the total time cost of simultaneously processing multiple images.

Session Chair

Yifei Zhu (Shanghai Jiao Tong University, China)


